Abstract: Decisions involving algorithmic rankings affect our lives in many ways, from product recommendations and receiving scholarships to securing jobs. While tools have been developed for interactively constructing fair consensus rankings from a handful of rankings, addressing the more complex real-world scenario, in which diverse opinions are represented by a larger collection of rankings, remains a challenge. In this paper, we address these challenges by reformulating the exploration of rankings as a dimension reduction problem in a system called FairSpace. FairSpace provides new views, including the Fair Divergence View and Cluster Views, which juxtapose fairness metrics of different local and alternative global consensus rankings to aid ranking analysis tasks. We illustrate the effectiveness of FairSpace through a series of use cases, demonstrating via interactive workflows that users are empowered to create local consensuses by grouping rankings similar in their fairness or utility properties, and then to hierarchically aggregate local consensuses into a global consensus through direct manipulation. We discuss how FairSpace opens the possibility for advances in dimension reduction visualization to benefit research on supporting fair decision-making in ranking-based contexts. Code, datasets, and demo video available at: osf.io/d7cwk
Free, publicly-accessible full text available June 1, 2026.
-
Subset selection is an integral component of AI systems that is increasingly affecting people's livelihoods in applications ranging from hiring, healthcare, and education to financial decisions. Subset selections powered by AI-based methods include top-k analytics, data summarization, clustering, and multi-winner voting. While group fairness auditing tools have been proposed for classification systems, these state-of-the-art tools are not directly applicable to measuring and conceptualizing fairness in selected subsets. In this work, we introduce the first comprehensive auditing framework, FINS, to support stakeholders in interpretably quantifying group fairness across a diverse range of subset-specific fairness concerns. FINS offers a family of novel measures that provide a flexible means to audit group fairness for goals that are item-based, score-based, or a combination thereof. FINS provides one unified, easy-to-understand interpretation across these different fairness problems. Further, we develop guidelines through the FINS Fair Subset Chart, which supports auditors in determining which measures are relevant to their problem context and fairness objectives. We provide a comprehensive mapping between each fairness measure and the belief system (i.e., worldview) that is encoded within its measurement of fairness. Lastly, we demonstrate the interpretability and efficacy of FINS in supporting the identification of real bias with case studies using AirBnB listings and voter records.
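FINS's actual measures are defined in the paper; purely as an illustration of what an item-based subset-fairness audit looks like, the sketch below compares each group's share of the selected subset against its share of the candidate pool (all names here are hypothetical, not FINS's API):

```python
from collections import Counter

def group_representation(selected, population, attr):
    """Share of each group among selected items vs. in the population.
    `selected` and `population` are lists of dicts; `attr` names the
    protected attribute key."""
    sel = Counter(item[attr] for item in selected)
    pop = Counter(item[attr] for item in population)
    return {g: {"selected_share": sel.get(g, 0) / len(selected),
                "population_share": pop[g] / len(population)}
            for g in pop}

def max_representation_gap(selected, population, attr):
    """Worst-case absolute gap between selected and population shares;
    0 means the subset mirrors the population exactly."""
    rep = group_representation(selected, population, attr)
    return max(abs(v["selected_share"] - v["population_share"])
               for v in rep.values())
```

A score-based variant would compare average utility scores of selected members per group instead of counts; the paper's measures cover both views and their combination.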
-
Combining the preferences of many rankers into one single consensus ranking is critical for consequential applications from hiring and admissions to lending. While group fairness has been extensively studied for classification, group fairness in rankings, and in particular in rank aggregation, remains in its infancy. Recent work introduced the concept of fair rank aggregation for combining rankings, but it was restricted to the case where candidates have a single binary protected attribute, i.e., they fall into two groups only. Yet it remains an open problem how to create a consensus ranking that represents the preferences of all rankers while ensuring fair treatment for candidates with multiple protected attributes such as gender, race, and nationality. In this work, we are the first to define and solve this open Multi-attribute Fair Consensus Ranking (MFCR) problem. As a foundation, we design novel group fairness criteria for rankings, called MANI-Rank, ensuring fair treatment of groups defined by individual protected attributes and their intersection. Leveraging the MANI-Rank criteria, we develop a series of algorithms that for the first time tackle the MFCR problem. Our experimental study with a rich variety of consensus scenarios demonstrates that our MFCR methodology is the only approach to achieve both intersectional and protected-attribute fairness while also representing the preferences expressed through many base rankings. Our real-world case study on merit scholarships illustrates the effectiveness of our MFCR methods in mitigating bias across multiple protected attributes and their intersections.
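The MANI-Rank criteria and MFCR algorithms are defined in the paper itself; as a minimal, hedged illustration of the two ingredients the abstract mentions, the sketch below aggregates base rankings with a simple Borda count and then measures how differently intersectional groups are positioned in the result (the function names and the spread-based parity check are illustrative, not the paper's definitions):

```python
from statistics import mean

def borda_consensus(rankings):
    """Aggregate base rankings by Borda count: sum each item's
    positions across rankings; a lower total means a better item."""
    items = rankings[0]
    score = {item: 0 for item in items}
    for r in rankings:
        for pos, item in enumerate(r):
            score[item] += pos
    return sorted(items, key=lambda it: score[it])

def group_rank_parity(ranking, groups):
    """Spread of average positions across groups. `groups` maps each
    item to a tuple of protected attribute values, so tuples naturally
    capture intersections such as (gender, race). 0 = perfect parity."""
    pos = {item: i for i, item in enumerate(ranking)}
    by_group = {}
    for item, g in groups.items():
        by_group.setdefault(g, []).append(pos[item])
    avg = {g: mean(ps) for g, ps in by_group.items()}
    return max(avg.values()) - min(avg.values())
```

A fairness-aware aggregator would go further than this sketch: rather than just measuring the spread after the fact, it would adjust the consensus so that the parity criterion is satisfied for every attribute and intersection while staying close to the base rankings.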